
    Strategic manipulations and collusions in Knaster procedure: a comment

    The note examines the susceptibility of envy-free variants of the Knaster procedure to manipulations and collusions.
    Keywords: Steinhaus-Knaster procedure; auction; insincere bidding.
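For context, the basic Knaster sealed-bid procedure (not the envy-free variants the note studies) can be sketched as follows; the function name and payoff bookkeeping are illustrative, and bids are assumed sincere:

```python
def knaster_payoffs(bids):
    """Basic Knaster sealed-bid division of one indivisible item.
    The highest bidder wins the item; side payments give every player
    a fair share b_i/n of their own valuation plus an equal slice of
    the surplus. Returns (winner_index, cash_payoff_per_player); the
    winner's cash payoff is negative, reflecting compensation paid out."""
    n = len(bids)
    w = max(range(n), key=lambda i: bids[i])
    # surplus left over after everyone is assigned their fair share b_i/n
    surplus = (n * bids[w] - sum(bids)) / n
    cash = []
    for i, b in enumerate(bids):
        value_share = b / n + surplus / n  # total value each player ends with
        # the winner's value includes the item itself, worth b = bids[w] to them
        cash.append(value_share - b if i == w else value_share)
    return w, cash
```

The cash transfers sum to zero (the procedure is budget balanced), and each player ends up with value strictly above their fair share b_i/n whenever bids differ, which is what makes misreported bids and collusive side deals profitable to study.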

    Consensus, contagion and clustering in a space-time model of public opinion formation

    We study a simple model of public opinion formation that posits that interaction between neighbouring agents leads to bandwagons in the dynamics of individual opinions, as well as in that of the aggregate process. We show that in different specifications of the model, there is a tendency for the process to reach consensus on one of the two competing opinions. We show how a publicly available poll of current public opinion may lead to a form of contagion, by which public opinion tends to agree with the poll. We point out that, in the absence of a poll, the process passes, after long time spans, through sequences of states that, viewed locally, are almost stationary and are characterized by large clusters of individuals holding the same opinion. The running metaphor we use is that of a model of pre-electoral public opinion formation, with two candidates running. We provide some heuristic considerations on the implications that these findings could have for the space-time allocation of funding in an electoral campaign.
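The neighbourhood "bandwagon" dynamic described above can be illustrated with a minimal majority-rule update on a ring of binary opinions; this is a generic local-interaction sketch under assumed synchronous updating, not the paper's exact specification:

```python
def majority_step(state):
    """One synchronous update on a ring of 0/1 opinions: each agent
    adopts the opinion held by the majority of itself and its two
    nearest neighbours (a simple bandwagon rule)."""
    n = len(state)
    return [1 if state[i - 1] + state[i] + state[(i + 1) % n] >= 2 else 0
            for i in range(n)]
```

Iterating this map, isolated dissenters flip immediately while blocks of two or more like-minded agents persist, which mirrors the large, locally almost-stationary clusters the abstract describes.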

    A simple locally interactive model of ergodic and nonergodic growth

    In this paper we propose a locally interactive model which explains both the cross-sectional dynamics and the possibility of multiple long-run equilibria. Firms can choose between two technologies, say 1 and 0; the returns from technology 1 are affected by the number of neighboring firms using it, while the returns from technology 0 are independent of neighboring firms' technological choices. Durlauf (1993) explains nonergodic growth via strong technological complementarities. By modeling the transmission of the spillover effects in a different way, we show that, in the presence of technological complementarities of intermediate strength, there are either two or infinitely many long-run equilibria. The basins of attraction of these equilibria depend on the initial conditions. On the other hand, when the technological complementarities are either very weak or very strong, there is a unique long-run equilibrium. As for the dynamic behavior, we explain the formation of large connected areas, or clusters. As the cluster size grows at a rate slower than t, such areas appear stationary along the dynamics.

    Testing and Modelling Market Microstructure Effects with an Application to the Dow Jones Industrial Average

    It is a well-accepted fact that stock returns data are often characterized by market microstructure effects, such as bid-ask spreads, liquidity ratios, turnover, and asymmetric information. This is particularly relevant when dealing with high-frequency data, which are often used to compute model-free measures of volatility, such as realized volatility. In this paper we suggest two test statistics. The first is used to test the null hypothesis of no microstructure noise. If the null is rejected, we proceed to perform a test of the hypothesis that the microstructure noise variance is independent of the sampling frequency at which the data are recorded. We provide empirical evidence based on the Dow Jones Industrial Average for the period 1997-2002. Our findings suggest that, while the presence of microstructure noise induces a severe bias when using high-frequency data, such a bias grows less than linearly in the number of intraday observations.
    Keywords: bipower variation; market microstructure; realized volatility.
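Realized volatility, the model-free measure mentioned above, is simply the sum of squared intraday log returns; a minimal sketch (the function name is illustrative):

```python
import math

def realized_variance(prices):
    """Sum of squared intraday log returns over one trading day.
    When observed prices carry i.i.d. microstructure noise, the
    expectation of this sum is inflated as the sampling frequency
    (the number of intraday observations) increases."""
    return sum(math.log(prices[t + 1] / prices[t]) ** 2
               for t in range(len(prices) - 1))
```

On a noise-free grid where every log return equals r, the measure returns exactly (number of returns) x r^2, which is the benchmark against which noise-induced bias is judged.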

    Predictive density construction and accuracy testing with multiple possibly misspecified diffusion models

    This paper develops tests for comparing the accuracy of predictive densities derived from (possibly misspecified) diffusion models. In particular, the authors first outline a simple simulation-based framework for constructing predictive densities for one-factor and stochastic volatility models. Then, they construct accuracy assessment tests that are in the spirit of Diebold and Mariano (1995) and White (2000). In order to establish the asymptotic properties of their tests, the authors also develop a recursive variant of the nonparametric simulated maximum likelihood estimator of Fermanian and Salanié (2004). In an empirical illustration, the predictive densities from several models of the one-month federal funds rates are compared.
    Keywords: econometric models - evaluation; stochastic analysis.

    Bootstrap Specification Tests with Dependent Observations and Parameter Estimation Error

    This paper introduces a parametric specification test for diffusion processes which is based on a bootstrap procedure that accounts for data dependence and parameter estimation error. The proposed bootstrap procedure additionally leads to straightforward generalizations of the conditional Kolmogorov test of Andrews (1997) and the conditional mean test of Whang (2000) to the case of dependent observations. The bootstrap hinges on a twofold extension of the Politis and Romano (1994) stationary bootstrap. First, we provide an empirical process version of this bootstrap, and second, we account for parameter estimation error. One important feature of this new bootstrap is that one need not specify the conditional distribution given the entire history of the process when forming conditional Kolmogorov tests. Hence, the bootstrap, when used to extend the Andrews (1997) conditional Kolmogorov test to the case of data dependence, allows for dynamic misspecification under both hypotheses. An example based on a version of the Cox, Ingersoll and Ross square root process is outlined, and related Monte Carlo experiments are carried out. These experiments suggest that the bootstrap has excellent finite sample properties, even for samples as small as 500 observations, when tests are formed using critical values constructed with as few as 100 bootstrap replications.
    Keywords: diffusion process; parameter estimation error; specification test; stationary bootstrap.
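The Politis and Romano (1994) stationary bootstrap that the paper extends resamples circular blocks whose lengths are geometrically distributed; a minimal sketch of one resample (the empirical-process and estimation-error extensions developed in the paper are substantially more involved):

```python
import random

def stationary_bootstrap(x, p, rng):
    """One Politis-Romano stationary-bootstrap resample of the series x.
    Blocks start at uniformly random positions and have geometric length
    with mean 1/p; indices wrap around the end of the series (circular),
    so the resampled series is itself stationary."""
    n = len(x)
    out = []
    while len(out) < n:
        start = rng.randrange(n)
        length = 1
        while rng.random() >= p:  # geometric block length, mean 1/p
            length += 1
        for j in range(length):
            out.append(x[(start + j) % n])
            if len(out) == n:
                break
    return out
```

Choosing p small preserves more of the serial dependence within each block, which is what lets the resample mimic the dependence structure of the original data.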

    A Randomized Procedure for Choosing Data Transformation

    Standard unit root and stationarity tests (see e.g. Dickey and Fuller (1979)) assume linearity under both the null and the alternative hypothesis. Violation of this linearity assumption can result in severe size and power distortion, both in finite and large samples. Thus, it is reasonable to address the problem of data transformation before running a unit root test. In this paper we propose a simple randomized procedure, coupled with sample conditioning, for choosing between levels and log-levels specifications in the presence of deterministic and/or stochastic trends. In particular, we add a randomized component to a basic test statistic, proceed by conditioning on the sample, and show that for all samples except a set of measure zero, the statistic has a χ² limiting distribution under the null hypothesis (log linearity), while it diverges under the alternative hypothesis (level linearity). Once we have chosen the proper data transformation, we remain with the standard problem of testing for a unit root, either in levels or in logs. Monte Carlo findings suggest that the proposed test has good finite sample properties for samples of at least 300 observations. In addition, an examination of the King, Plosser, Stock and Watson (1991) data set is carried out, and evidence in favor of using logged data is provided.
    Keywords: deterministic trend; nonlinear transformation; nonstationarity; randomized procedure.

    Information in the revision process of real-time datasets

    Rationality of early release data is typically tested using linear regressions. Thus, failure to reject the null does not rule out the possibility of nonlinear dependence. This paper proposes two tests which instead have power against generic nonlinear alternatives. A Monte Carlo study shows that the suggested tests have good finite sample properties. Additionally, we carry out an empirical illustration using a real-time dataset for money, output, and prices. Overall, we find strong evidence against data rationality. Interestingly, for money stock the null is not rejected by linear tests but is rejected by our tests.
    Keywords: real-time data.

    Heap: a command for estimating discrete outcome variable models in the presence of heaping at known points

    Self-reported survey data are often plagued by the presence of heaping. Accounting for this measurement error is crucial for the identification and consistent estimation of the underlying model parameters from such data. In this article, we introduce two commands. The first command, heapmph, estimates the parameters of a discrete-time mixed proportional hazard model with gamma unobserved heterogeneity, allowing for fixed and individual-specific censoring and different-sized heap points. The second command, heapop, extends the framework to ordered choice outcomes subject to heaping. We also provide suitable specification tests.